Web Survey Bibliography
Survey research has historically relied on a probabilistic model to underlie its sampling frame. With rare exceptions, online research is non-probabilistic. Research without the safety net of a probabilistic frame raises all kinds of alarms, and challenges to the reliability of online research have grown to a crescendo as its non-probabilistic nature has become evident. However, not all sampling frames must be probabilistic. Unfortunately, no standard metrics exist to track reliability in online sampling; whether they are access panels or social networks, there are no standardized means of balancing panels or even comparing them. To confound the situation, the commercially used convenience panels are vastly different from each other (Gittelman and Trimarchi, CASRO Panel Conference, February 2009, paper available). These differences are so far-reaching that those who elect to use these sample sources are not only without a safety net, they are at considerable professional risk. We have completed an analysis of eighteen American panels and have found that respondent aging, the frequency of professional responders, other satisficing behaviors, and dramatic differences between sociological, psychographic and buying-behavior segmentations make for a cacophony of differences seemingly impossible to correct.
In this study we will present the results of an extensive global study covering forty countries. Within each country, panels will be compared using a 17-minute questionnaire with 400 completes per panel, and we hope to present five or more providers per market. No such extensive comparison has been done on a global basis; in fact, inter-panel comparisons themselves are rare, with data from very few having been presented on any scale.
Preliminary data (24,000 interviews) show evolutionary trends in convenience panel development. Between-panel differences appear more extreme in the United States than in other markets.
We are proposing two sets of practices: (1) using panel performance metrics [professionals, speeders, straight-liners, invalids, inconsistencies, etc.], for which we have developed standard quantitative measures, and (2) a mechanism for developing a family of sampling standards based upon segmentation by key variables such as, but not limited to, media, purchasing and psychographics. It is the new availability of global data that allows us to present universal standards meeting the necessary requirements that are our focus in this conference. In addition to measuring performance, we believe that there are three key requirements for standard panel metrics: (1) the ability to capture panel performance variations consistent with the differing needs of sample users (e.g., a broadcasting company might wish to anchor its sampling frame to media segments); (2) the ability to create a database that is retrospective, in that new sample sources can be added without repeating the analysis; and (3) a focus on indices that are pragmatic in their measure (we view buying behavior as the most pragmatic).
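To make such per-panel quality metrics concrete, here is a minimal sketch of how speeder and straight-liner rates might be computed from respondent-level data. The column names, the half-median speed cutoff, and the zero-variance grid rule are illustrative assumptions, not the authors' actual measures.

```python
import pandas as pd

# Illustrative respondent-level data: one row per complete.
# Assumed columns: a panel identifier, interview length in minutes,
# and a block of grid (matrix) items used to detect straight-lining.
GRID_ITEMS = ["q10_1", "q10_2", "q10_3", "q10_4", "q10_5"]

def quality_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Per-panel rates of two common satisficing behaviors.

    Speeders: completes faster than half the overall median length.
    Straight-liners: identical answers across the entire grid block.
    (Both cutoffs are illustrative, not the paper's definitions.)
    """
    median_length = df["minutes"].median()
    df = df.assign(
        speeder=df["minutes"] < 0.5 * median_length,
        straight_liner=df[GRID_ITEMS].nunique(axis=1) == 1,
    )
    return df.groupby("panel")[["speeder", "straight_liner"]].mean()

# Example usage:
# df = pd.read_csv("completes.csv")
# print(quality_metrics(df))
```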
In this talk, we propose to use segmentation analysis as a new metric that will allow us to anchor online data in a new non-probabilistic sampling frame. It is the existence of global data that gives us a rare opportunity to experiment with this new methodology. Our goal is to use segmentation in each country to create a fingerprint that can be consistently maintained by blending panels. By minimizing the variability of the segments through optimization and panel combination, we will establish a means of stabilizing online data irrespective of the panels and sourcing modes from which the data originate. We cannot stabilize online data unless we provide it with a reference point to anchor itself; the segments are that anchor. As the sourcing models continue to shift, panels will age and shift with them; we need a reliable anchor that rises above these problems. It is essential that we explore tools to measure these changes: without a means of comparison we cannot expect to measure drift, nor can we expect to have a platform for predicting the future. We do not profess to be on the road to a new probabilistic framework, but rather a platform for comparison and continuity. We believe that there is a theoretical population online that can serve this purpose. Using the database we have gathered, which includes respondents from over 160 global panels (64,000 interviews) distributed among 40 global markets, we shall introduce new methods to build “perspective”.
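As a sketch of what a country-level segment "fingerprint" and a drift measure against it could look like (the segment column and the use of total variation distance are our own assumptions, not the authors' method):

```python
import pandas as pd

def fingerprint(df: pd.DataFrame, segment_col: str = "segment") -> pd.Series:
    """Share of respondents in each segment: the panel's 'fingerprint'."""
    return df[segment_col].value_counts(normalize=True).sort_index()

def drift(fp_a: pd.Series, fp_b: pd.Series) -> float:
    """Total variation distance between two fingerprints (0 = identical)."""
    a, b = fp_a.align(fp_b, fill_value=0.0)
    return 0.5 * (a - b).abs().sum()

# Example: compare each panel's current wave against a country anchor.
# anchor = fingerprint(reference_wave)
# for panel, wave in completes.groupby("panel"):
#     print(panel, drift(fingerprint(wave), anchor))
```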
Based on this, we will use our segmentation models as a means of creating a “convenience” sampling frame by averaging segments into a “Grand Mean.” Using optimization models, we will select the convenience panels that best reflect the grand mean and the proportions by which they best fit together. We shall give evidence for the efficiency of these strategies.
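One way to read this optimization step is as a constrained least-squares problem: choose non-negative panel proportions summing to one so that the blended segment distribution sits as close as possible to the Grand Mean. A minimal sketch under that assumption (the objective and solver choice are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def blend_weights(panel_fps: np.ndarray, grand_mean: np.ndarray) -> np.ndarray:
    """Panel mixing proportions whose blend best matches the Grand Mean.

    panel_fps:  (n_panels, n_segments) matrix; each row is a panel's
                segment distribution and sums to 1.
    grand_mean: (n_segments,) target distribution.
    """
    n = panel_fps.shape[0]
    objective = lambda w: np.sum((w @ panel_fps - grand_mean) ** 2)
    result = minimize(
        objective,
        x0=np.full(n, 1.0 / n),                  # start from an equal blend
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,                 # no negative proportions
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return result.x

# Toy example: three panels, four segments, uniform target.
panels = np.array([[0.40, 0.30, 0.20, 0.10],
                   [0.10, 0.20, 0.30, 0.40],
                   [0.25, 0.25, 0.25, 0.25]])
target = np.array([0.25, 0.25, 0.25, 0.25])
print(blend_weights(panels, target))
```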
Web survey bibliography (4086)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not...; 2017; Toepoel, V.; Emerson, H.
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Answering Without Reading: IMCs and Strong Satisficing in Online Surveys; 2017; Anduiza, E.; Galais, C.
- Ideal and maximum length for a web survey; 2017; Revilla, M.; Ochoa, C.
- Social desirability bias in self-reported well-being measures: evidence from an online survey; 2017; Caputo, A.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Web Survey Gamification - Increasing Data Quality in Web Surveys by Using Game Design Elements; 2017; Schacht, S.; Keusch, F.; Bergmann, N.; Morana, S.
- Effects of sampling procedure on data quality in a web survey; 2017; Rimac, I.; Ogresta, J.
- Comparability of web and telephone surveys for the measurement of subjective well-being; 2017; Sarracino, F.; Riillo, C. F. A.; Mikucka, M.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- A Meta-Analysis of the Effects of Incentives on Response Rate in Online Survey Studies; 2017; Mohammad Asire, A.
- Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote...; 2017; Breton, C.; Cutler, F.; Lachance, S.; Mierke-Zatwarnicki, A.
- Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate...; 2017; Saleh, A.; Bista, K.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Paradata as an aide to questionnaire design: Improving quality and reducing burden; 2017; Timm, E.; Stewart, J.; Sidney, I.
- Fieldwork monitoring and managing with time-related paradata; 2017; Vandenplas, C.
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Interviewer Gender and Survey Responses: The Effects of Humanizing Cues Variations; 2017; Jablonski, W.; Krzewinska, A.; Grzeszkiewicz-Radulska, K.
- Millennials and emojis in Spain and Mexico; 2017; Bosch Jover, O.; Revilla, M.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes; 2017; Jablonski, W.; Grzeszkiewicz-Radulska, K.; Krzewinska, A.
- A Comparison of Emerging Pretesting Methods for Evaluating “Modern” Surveys; 2017; Geisen, E.; Murphy, J.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- Pushing to web in the ISSP; 2017; Jonsdottir, G. A.; Dofradottir, A. G.; Einarsson, H. B.
- The 2016 Canadian Census: An Innovative Wave Collection Methodology to Maximize Self-Response and Internet...; 2017; Mathieu, P.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Redirected Inbound Call Sampling (RICS): A New Methodology; 2017; Krotki, K.; Bobashev, G.; Levine, B.; Richards, S.
- An Empirical Process for Using Non-probability Survey for Inference; 2017; Tortora, R.; Iachan, R.
- The perils of non-probability sampling; 2017; Bethlehem, J.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Necessary but Insufficient: Why Measurement Invariance Tests Need Online Probing as a Complementary...; 2017; Meitinger, K.
- Nonresponse in Organizational Surveying: Attitudinal Distribution Form and Conditional Response Probabilities...; 2017; Kulas, J. T.; Robinson, D. H.; Kellar, D. Z.; Smith, J. A.
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Is There a Future for Surveys?; 2017; Miller, P. V.
- Reducing speeding in web surveys by providing immediate feedback; 2017; Conrad, F.; Tourangeau, R.; Couper, M. P.; Zhang, C.
- Social Desirability and Undesirability Effects on Survey Response latencies; 2017; Andersen, H.; Mayerl, J.
- A Working Example of How to Use Artificial Intelligence To Automate and Transform Surveys Into Customer...; 2017; Neve, S.
- A Case Study on Evaluating the Relevance of Some Rules for Writing Requirements through an Online Survey...; 2017; Warnier, M.; Condamines, A.
- Estimating the Impact of Measurement Differences Introduced by Efforts to Reach a Balanced Response...; 2017; Kappelhof, J. W. S.; De Leeuw, E. D.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.